Current LLMs can approximate a very low level of human interaction for a very brief period of time. The problem is that they have nothing to anchor themselves on (e.g. morality or experience, which are effectively feedback loops on what is acceptable); they literally just throw a ton of shit at the wall to see what sticks, and unless properly guided they will spew out more and more random stuff.
Until we can define and encode what it is that makes up intelligence and morality, we have nothing to fear from a self-aware AI. People will use AI to improve their effectiveness in many areas, including killing other people, but that is just people doing what people do. As long as your gun isn't controlled by someone else (e.g. smart triggers), you will be able to defend yourself.